The No Free Lunch Theorems for Optimisation: An Overview
Abstract
Many algorithms have been devised for tackling combinatorial optimisation problems (COPs). Traditional Operations Research (OR) techniques, such as Branch and Bound and Cutting Plane algorithms, can, given enough time, guarantee an optimal solution, as they explicitly exploit features of the objective function they are solving. Specialised heuristics exist for most COPs that also exploit features of the objective function to arrive at a good, but probably not optimal, solution. There are also a number of metaheuristics: algorithms that describe how to search the solution space without being tied to any one problem type. Some of these may be tailored to better suit a particular problem. However, there are a number of so-called "black-box" optimisation algorithms, which use little or no problem-specific information. Key examples of the black-box approach are Evolutionary Algorithms (EAs) and Simulated Annealing (SA). Random search is also a black-box optimisation algorithm, and so represents an important benchmark against which the performance of other search algorithms may be measured.

Given an optimisation problem (i.e. objective function) f and an algorithm a, it is important to have some measure of how well a performs on f. Moreover, given empirical evidence of a's performance on f, is it possible to make generalisations about a's performance on other functions, either of the same or a different type to f?

Intuition would have one believe that some algorithms perform better than others on average. However, the No Free Lunch (NFL) theorems state that no such assertion can be made: averaged across all possible optimisation functions, all algorithms perform equally well. In particular, superior performance on one set of problems must be exactly offset by inferior performance on the remaining problems; an algorithm that beats random search on some class of functions is necessarily beaten by random search elsewhere. To put the NFL theorems in their proper context, it is important to understand what they are and are not saying.
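The averaging claim can be illustrated, though not proved, on a deliberately tiny example. The sketch below (an assumption-laden toy, not the general theorem, which also covers adaptive algorithms) enumerates every objective function from a three-point search space to {0, 1} and every fixed-order, non-revisiting search of that space, taking "performance" to be the number of evaluations needed to first hit the function's maximum. Every search order attains the same average over the full set of functions:

```python
from itertools import permutations, product

X = range(3)                                   # a three-point search space
functions = list(product([0, 1], repeat=3))    # all 8 objective functions f: X -> {0, 1}
algorithms = list(permutations(X))             # all 6 fixed-order, non-revisiting searches

def steps_to_optimum(order, f):
    """Number of evaluations until the search first sees max(f)."""
    best = max(f)
    for step, x in enumerate(order, start=1):
        if f[x] == best:
            return step

# Average performance of each search order across ALL objective functions.
averages = {
    order: sum(steps_to_optimum(order, f) for f in functions) / len(functions)
    for order in algorithms
}

print(set(averages.values()))   # one value only: {1.5} -- every order averages the same
```

The underlying reason is that, over the full set of functions, each search order observes every possible sequence of function values exactly once, so any performance measure based on the observed values has an identical distribution for all orders.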
From [2], p.69:
Similar works
Arbitrary function optimisation with metaheuristics - No free lunch and real-world problems
No free lunch theorems for optimisation suggest that empirical studies on benchmark problems are pointless, or even misleading, when algorithms are applied to other problems not clearly related to the previous ones. Roughly speaking, reported empirical results reflect not just the algorithms' performance, but the benchmark used as well; and consequently, recomm...
Optimization, block designs and No Free Lunch theorems
We study the precise conditions under which all optimisation strategies for a given family of finite functions yield the same expected maximisation performance, when averaged over a uniform distribution of the functions. In the case of bounded-length searches in a family of Boolean functions, we provide tight connections between such “No Free Lunch” conditions and the structure of t-designs and...
The Supervised Learning No-Free-Lunch Theorems
This paper reviews the supervised learning versions of the no-free-lunch theorems in a simplified form. It also discusses the significance of those theorems, and their relation to other aspects of supervised learning.
No Free Lunch and Free Leftovers Theorems for Multiobjective Optimisation Problems
The classic NFL theorems are invariably cast in terms of single objective optimization problems. We confirm that the classic NFL theorem holds for general multiobjective fitness spaces, and show how this follows from a ‘single-objective’ NFL theorem. We also show that, given any particular Pareto Front, an NFL theorem holds for the set of all multiobjective problems which have that Pareto Front...
No-Free-Lunch theorems in the continuum
No-Free-Lunch Theorems state, roughly speaking, that the performance of all search algorithms is the same when averaged over all possible objective functions. This fact was precisely formulated for the first time in a now famous paper by Wolpert and Macready, and then subsequently refined and extended by several authors, always in the context of a set of functions with discrete domain and codom...